The speech age
Researchers at MIT have developed a new approach to training speech recognition systems that does not depend on transcriptions, as current models do. Instead, their system analyses correspondences between images and spoken descriptions of those images, captured in a large collection of audio recordings, and learns which acoustic features in the recordings correlate with which image characteristics. Traditionally, speech recognition systems such as those that convert speech to text on smartphones are built with machine learning systems that process many thousands of utterances together with their transcriptions to learn a mapping between acoustic features and words. While this method works quite well, the professional-grade transcription it requires is costly and time-consuming.
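The core idea of matching audio to images can be sketched with a toy example. The code below is not MIT's model: the embedding vectors, scene names, and clip names are all hypothetical stand-ins for what, in the real system, neural networks would produce after training on paired images and recordings. It only illustrates the retrieval step, pairing each audio clip with the image whose features it most closely resembles.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical image embeddings (in the real system, learned from pixels).
image_embeddings = {
    "beach":  [0.9, 0.1, 0.0],
    "city":   [0.1, 0.8, 0.3],
    "forest": [0.0, 0.2, 0.9],
}

# Hypothetical audio embeddings (in the real system, learned from waveforms).
audio_embeddings = {
    "clip_waves":   [0.85, 0.15, 0.05],
    "clip_traffic": [0.20, 0.75, 0.25],
    "clip_birds":   [0.05, 0.25, 0.85],
}

def best_image(audio_vec):
    # Retrieve the image whose embedding is most similar to the audio clip's.
    return max(image_embeddings,
               key=lambda name: cosine(audio_vec, image_embeddings[name]))

for clip, vec in audio_embeddings.items():
    print(clip, "->", best_image(vec))
```

Because the two sets of vectors live in a shared space, no transcription is needed to link a recording to its image; this is the property the MIT approach exploits.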
Elon Musk Sees Brain-Computer Systems in Humans' Future
Is human intelligence heading toward a cyborg-dominated future? In a recent tweet, the Tesla and SpaceX CEO teased that a brain-computer system that links human brains to a computer interface -- a "neural lace" -- may be announced early this year, reported TechCrunch. Musk first mentioned the neural lace concept (the addition of a digital layer of intelligence to the human brain) at Recode's Code Conference last year. The brain-computer system would create a "symbiosis with machines," Musk said, according to TechCrunch. "We're already a cyborg -- I mean, you have a digital or partial version of yourself in the form of your emails and your social media and all the things that you do, and you have basically superpowers with your computer and your phone and the applications that are there," Musk said at the conference.